Designing interactive systems that embrace uncertainty
Traditionally, computer interfaces have relied on deterministic input (e.g. keyboard and mouse) and plentiful feedback (e.g. visual displays and audio). Today, computer interaction is often off-desktop, performed in diverse environments with diverse devices. This often requires interaction via uncertain input methods such as speech recognition, touchscreen gestures, mid-air gestures, or eye-tracking. Interaction becomes even more challenging when a user's input or output capabilities are limited by situation or disability. In this talk, I outline our efforts to create interfaces that are efficient, pleasant, and accessible despite uncertain input and/or limited output.
FlexType: Flexible Text Input with a Small Set of Input Gestures
In many situations, it may be impractical or impossible to enter text by selecting precise locations on a physical or touchscreen keyboard. We present an ambiguous keyboard with four character groups that has potential applications for eyes-free text entry, as well as text entry using a single switch or a brain-computer interface. We develop a procedure for optimizing these character groupings based on a disambiguation algorithm that leverages a long-span language model. We produce both alphabetically-constrained and unconstrained character groups in an offline optimization experiment and compare them in a longitudinal user study. Our results did not show a significant difference between the constrained and unconstrained character groups after four hours of practice. As expected, participants had significantly more errors with the unconstrained groups in the first session, suggesting a higher barrier to learning the technique. We therefore recommend the alphabetically-constrained character groups, where participants were able to achieve an average entry rate of 12.0 words per minute with a 2.03% character error rate using a single hand and with no visual feedback.
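The core idea of an ambiguous keyboard can be illustrated with a minimal sketch: each key covers a group of letters, so one key sequence maps to several candidate words, and a language model ranks them. The four alphabetically-constrained groups and the toy unigram frequencies below are illustrative assumptions, not the groups or long-span model from the paper.

```python
# Hypothetical four-group ambiguous keyboard in the spirit of FlexType.
# GROUPS and FREQ are illustrative stand-ins, not the paper's values.
GROUPS = ["abcdef", "ghijkl", "mnopqr", "stuvwxyz"]  # assumed alphabetic split
KEY_OF = {c: i for i, g in enumerate(GROUPS) for c in g}

# Toy unigram frequencies standing in for the long-span language model.
FREQ = {"the": 5.0, "tie": 1.0, "toe": 0.8, "vie": 0.2}

def code(word):
    """Map a word to its sequence of group (key) indices."""
    return tuple(KEY_OF[c] for c in word)

def disambiguate(keys, lexicon=FREQ):
    """Return candidate words matching the key sequence, most likely first."""
    matches = [w for w in lexicon if code(w) == tuple(keys)]
    return sorted(matches, key=lexicon.get, reverse=True)
```

For example, the key sequence for "the" also matches "tie" and "vie", and the language model's job is to surface the intended word first.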
Understanding Adoption Barriers to Dwell-Free Eye-Typing: Design Implications from a Qualitative Deployment Study and Computational Simulations
Eye-typing is a slow and cumbersome text entry method typically used by individuals with no other practical means of communication. As an alternative, prior HCI research has proposed dwell-free eye-typing as a potential improvement that eliminates time-consuming and distracting dwell-timeouts. However, it is rare that such research ideas are translated into working products. This paper reports on a qualitative deployment study of a product that was developed to allow users access to a dwell-free eye-typing research solution. This allowed us to understand how such a research solution would work in practice, as part of users' current communication solutions in their own homes. Based on interviews and observations, we discuss a number of design issues that currently act as barriers preventing widespread adoption of dwell-free eye-typing. The study findings are complemented with computational simulations in a range of conditions that were inspired by the findings in the deployment study. These simulations serve to both contextualize the qualitative findings and to explore quantitative implications of possible interface redesigns. The combined analysis gives rise to a set of design implications for enabling wider adoption of dwell-free eye-typing in practice.
A parallel implementation of a fluid flow simulation using smoothed particle hydrodynamics
The reaction of fluid or gas flowing around an obstacle is a common engineering problem. Computer simulations are often used to measure and visualize the physical processes involved. In this report, we will discuss a parallel implementation of a simulation using the Smoothed Particle Hydrodynamics (SPH) approach to fluid flow. We will discuss the design decisions made during development, problems encountered during implementation, and parallel performance issues.
Section 2 will introduce the fluid flow problem, the SPH technique, and our project motivations. Section 3 will discuss our first O(N²) serial version of the simulator. Section 4 covers our improved O(N) grid-based serial version. In Section 5, we will develop the MPI parallel simulation, discuss debugging issues, and provide results from our performance tuning. Section 6 describes our Java application client used to control the simulation. In Section 7, the Java/C++ socket library used for communication with the client is discussed. We will conclude in Section 8 with an evaluation of our successes and failures during the project and look at future directions for this work.
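The jump from the O(N²) all-pairs version to the O(N) grid-based version rests on a standard cell-linked-list neighbor search: bin particles into cells the size of the smoothing radius h, then compare each particle only against particles in its own and adjacent cells. The 2-D setting and function names below are an illustrative sketch, not the report's actual implementation.

```python
# Sketch of grid-based SPH neighbor search (assumed 2-D, cell size = h).
from collections import defaultdict

def build_grid(positions, h):
    """Hash each particle index into its integer cell coordinates."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbors(i, positions, grid, h):
    """Indices of particles within smoothing radius h of particle i,
    found by scanning only the 3x3 block of cells around particle i."""
    cx, cy = int(positions[i][0] // h), int(positions[i][1] // h)
    result = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i:
                    rx = positions[i][0] - positions[j][0]
                    ry = positions[i][1] - positions[j][1]
                    if rx * rx + ry * ry <= h * h:
                        result.append(j)
    return result
```

With roughly constant particle density, each particle scans a bounded number of candidates, so the per-step cost drops from O(N²) to approximately O(N).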
Inviscid text entry and beyond
The primary focus of our workshop is on exploring ways to enable inviscid text entry on mobile devices. In inviscid text entry, it is the user's creativity that is the text-creation bottleneck rather than the text entry interface. The inviscid rate is estimated at 67 wpm, while current mobile text entry methods are typically 20-40 wpm. In this workshop, participants will discuss and demonstrate early work on novel methods that allow very rapid text entry, even if such methods are currently quite error-prone. In addition to submitting a position paper, participants are strongly encouraged to bring a demo to present during the workshop's interactive Show-and-Tell session. As well as exploring new entry methods, the workshop will discuss experimental tasks and evaluation methodologies for researching inviscid text entry. Looking beyond the speed of entry, the workshop will explore often overlooked aspects of text entry such as user adaptation, post-entry correction/revision/formatting, entry of diverse types of text, and entry when a user's input or output capabilities are limited. Finally, the workshop serves to strengthen the community of text entry researchers who attend CHI, as well as provide an opportunity for new members to join this community.
Ubiquitous text interaction
Computer-based interactions increasingly pervade our everyday environments. Be it on a mobile device, a wearable device, a wall-sized display, or an augmented reality device, interactive systems often rely on the consumption, composition, and manipulation of text. The focus of this workshop is on exploring the problems and opportunities of text interactions that are embedded in our environments, available all the time, and used by people who may be constrained by device, situation, or disability. This workshop welcomes all researchers interested in interactive systems that rely on text input or output. Participants should submit a short position statement outlining their background, past work, and future plans, and suggesting a use-case they would like to explore in depth during the workshop. During the workshop, small teams will form around common or compelling use-cases. Teams will spend time brainstorming, creating low-fidelity prototypes, and discussing their use-case with the group. Participants may optionally submit a technical paper for presentation as part of the workshop program. The workshop serves to sustain and build the community of text entry researchers who attend CHI. It provides an opportunity for new members to join this community and to solicit feedback from experts in a small and supportive environment.
A Design Engineering Approach for Quantitatively Exploring Context-Aware Sentence Retrieval for Nonspeaking Individuals with Motor Disabilities
Nonspeaking individuals with motor disabilities typically have very low communication rates. This paper proposes a design engineering approach for quantitatively exploring context-aware sentence retrieval as a promising complementary input interface, working in tandem with a word-prediction keyboard. We motivate the need for complementary design engineering methodology in the design of augmentative and alternative communication and explain how such methods can be used to gain additional design insights. We then study the theoretical performance envelopes of a context-aware sentence retrieval system, identifying potential keystroke savings as a function of the parameters of the subsystems, such as the accuracy of the underlying auto-complete word prediction algorithm and the accuracy of sensed context information under varying assumptions. We find that context-aware sentence retrieval has the potential to provide users with considerable improvements in keystroke savings under reasonable parameter assumptions of the underlying subsystems. This highlights how complementary design engineering methods can reveal additional insights into design for augmentative and alternative communication.
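The keystroke-savings analysis can be sketched with a simple model: savings is the fraction of letter-by-letter keystrokes avoided, and the expected cost depends on whether context-aware retrieval succeeds (one keystroke to accept a whole sentence) or the user falls back to word prediction. The success probability and per-word fallback cost below are assumed parameters for illustration, not the paper's measured values.

```python
# Hedged sketch of a keystroke-savings model; parameters are assumptions.

def keystroke_savings(full_keystrokes, actual_keystrokes):
    """Fraction of keystrokes saved relative to typing everything out."""
    return 1.0 - actual_keystrokes / full_keystrokes

def expected_savings(sentence_chars, retrieval_acc, per_word_cost, words):
    """Expected savings for a sentence of sentence_chars characters:
    with probability retrieval_acc the sentence is retrieved from
    context in 1 keystroke; otherwise the user types it with word
    prediction at per_word_cost keystrokes per word."""
    fallback = words * per_word_cost
    expected = retrieval_acc * 1 + (1 - retrieval_acc) * fallback
    return keystroke_savings(sentence_chars, expected)
```

Sweeping `retrieval_acc` and `per_word_cost` over plausible ranges is one way to trace out the kind of theoretical performance envelope the paper explores.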
Comparing Smartphone Speech Recognition and Touchscreen Typing for Composition and Transcription
Ruan et al. found that transcribing short phrases with speech recognition was nearly 200% faster than typing on a smartphone. We extend this comparison to a novel composition task, using a protocol that enables a controlled comparison with transcription. Results show that both composing and transcribing with speech is faster than typing. However, the magnitude of this difference is smaller for composition. Speech also has a lower error rate than the keyboard during composition, but not during transcription. When transcribing, speech outperformed typing in most NASA-TLX measures, but when composing, there were no significant differences between typing and speech for any measure except physical demand.